Publisher: | Tordotcom |
Copyright: | 2022 |
ISBN: | 1-250-21097-6 |
Format: | Kindle |
Pages: | 340 |
AMD Issues It's just been a couple of hard weeks for AMD, apparently. The first issue is the TPM (Trusted Platform Module) flaw demonstrated by a couple of security researchers. From what is known, with about $200 worth of tools, some time, and physical access to somebody's machine, you can break into it. Ironically, MS made a huge show about TPM and also made it effectively a requirement if a person wanted to have Windows 11. I remember Matthew Garrett sharing about TPM and issues with Lenovo laptops. While AMD has acknowledged the issue, its response has been somewhat wishy-washy. But this is not the only issue that has been plaguing AMD. There have been reports of AMD chips literally exploding, and again AMD issued a somewhat wishy-washy response. Asus did make some changes, but whether they cover all Zen 4 parts or only some is not known. Most people are expecting a recession in I.T. hardware this year as well as next year due to high prices. No idea if things will change, if ever.
netsh interface ip show addresses

Here's my output:

PS C:\Users\cjcol> netsh interface ip show addresses "Wi-Fi"

Configuration for interface "Wi-Fi"
    DHCP enabled:                         Yes
    IP Address:                           172.16.79.53
    Subnet Prefix:                        172.16.79.0/24 (mask 255.255.255.0)
    Default Gateway:                      172.16.79.1
    Gateway Metric:                       0
    InterfaceMetric:                      50

Did you follow the instructions linked above in the prerequisites section? If not, take a moment to do so now.
/ and /home (I kept /boot as ext4). The thought process was that this was a local machine (so easy access if it all went wrong) and I take regular backups (so if it all went wrong I could recover). That was a year and a half ago and it's been pretty dull; I mostly forget I'm running btrfs instead of ext4. This is on a machine that tracks Debian testing, so currently on kernel 6.1 but originally installed with 5.10. So it seems modern btrfs is reasonably stable for a machine that isn't driven especially hard. Good start.
The fact I forget what filesystem I'm running points to the fact that I'm not actually doing anything special here. I get the advantage of data checksumming, but not much else. 2 things spring to mind. Firstly, I don't do snapshots. Given I run testing it might be wiser if I did take a snapshot before every apt-get upgrade, and I have a friend who does just that, but even when I've run unstable I've never had a machine get itself into a state that I couldn't recover, so I haven't spent time investigating. I note Ubuntu has apt-btrfs-snapshot but it doesn't seem to have had any updates for years.
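A pre-upgrade snapshot is only a couple of commands anyway; a minimal sketch (the /snapshots location is hypothetical, and it assumes / is itself a btrfs subvolume) would be something like:

# btrfs subvolume snapshot -r / /snapshots/root-$(date +%Y%m%d)
# apt-get upgrade

Rolling back is then a matter of booting with the snapshot as the root subvolume, or just copying files back out of it.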
The other thing I didn't do when I installed my desktop is take advantage of subvolumes. I'm still trying to get my head around exactly what I want them for, but they provide a partial replacement for LVM when it comes to carving up disk space. Instead of the separate / and /home LVs I created, I could have created a single LV with a single btrfs filesystem on it. / and /home would then be separate subvolumes, allowing me to snapshot each individually. Quotas can also be applied separately, so there's still the potential to prevent one subvolume taking all available space.
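For concreteness, that single-filesystem layout would look roughly like this (device and mount point names here are hypothetical):

# mkfs.btrfs /dev/mapper/vg0-btrfs
# mount /dev/mapper/vg0-btrfs /mnt
# btrfs subvolume create /mnt/rootfs
# btrfs subvolume create /mnt/home
# btrfs quota enable /mnt
# btrfs qgroup limit 50G /mnt/home

/mnt/rootfs and /mnt/home would then be mounted on / and /home using the subvol= mount option, with the qgroup limit preventing /home from growing beyond 50G.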
Encouraged by the lack of hassle with my desktop I decided to try moving my sbuild machine over to use btrfs for its build chroots. For Reasons this is a VM kindly hosted by a friend, rather than something local. To be honest these days I would probably go for local hosting, but it works and there's no strong reason to move. The point is it's remote, and so if migrating went wrong and I had to ask for assistance I'd be bothering someone who's doing me a favour as it is.
The build VM is, of course, running LVM, and there was luckily some free space available. I'm reasonably sure the underlying storage involves spinning rust, so I did a laborious set of pvmove commands to make sure all the available space was at the start of the PV, and created a new btrfs volume there. I was advised that while btrfs-convert would do the job, it was better to create a fresh filesystem where possible. This time I did create an initial root subvolume.
Configuring up sbuild was then much simpler than I'd expected. My setup originally started out as a set of tarballs for the chroots that would get untarred + used for the builds, which is pretty slow. Once overlayfs was mature enough I switched to that. I'd had a conversation with Enrico about his nspawn/btrfs setup, but it turned out Russ Allbery had written an excellent set of instructions on sbuild with btrfs. I tweaked my existing setup based on his details, and I was in business. Each chroot is a separate subvolume - I don't actually end up having to mount them individually, but it means that only the chroot in use gets snapshotted. For example during a build the following can be observed:
# btrfs subvolume list /
ID 257 gen 111534 top level 5 path root
ID 271 gen 111525 top level 257 path srv/chroot/unstable-amd64-sbuild
ID 275 gen 27873 top level 257 path srv/chroot/bullseye-amd64-sbuild
ID 276 gen 27873 top level 257 path srv/chroot/buster-amd64-sbuild
ID 343 gen 111533 top level 257 path srv/chroot/snapshots/unstable-amd64-sbuild-328059a0-e74b-4d9f-be70-24b59ccba121
257 rather than 271, but digging further with btrfs subvolume show on the 2 mounted directories correctly showed the snapshot had a parent equal to the chroot, not /.
As a final step I ran jdupes via jdupes -1Br / to deduplicate things across the filesystem. It didn't end up providing a significant saving unfortunately - I guess there's a reasonable amount of change between Debian releases - but I then tried it on my desktop, which tends to have a large number of similar source trees checked out. There I managed to save about 5% on /home, which didn't seem too shabby.
The sbuild setup has been in place for a couple of months now, and I've run quite a few builds on it while preparing for the freeze. So I'm fairly confident in the stability of the setup, and my next move is to transition my local house server over to btrfs for its containers (which all run under systemd-nspawn). Those are generally running a Debian stable base so there should be a decent amount of commonality for deduping.

I'm not saying I'm yet at the point where I'll default to btrfs on new installs, but I'm definitely looking at it for situations where I think I can get benefits from deduplication, or being able to divide up disk space without hard partitioning it.

(And, just to answer the worry I had when I started, I've got nowhere near ENOSPC problems, but I believe they're handled much more gracefully these days. And my experience of ZFS when it got above 90% utilization was far from ideal too.)
#!/bin/bash

# Either rb3011 (arm) or rb5009 (arm64)
#HOSTNAME="rb3011"
HOSTNAME="rb5009"

if [ "x${HOSTNAME}" == "xrb3011" ]; then
	ARCH=armhf
elif [ "x${HOSTNAME}" == "xrb5009" ]; then
	ARCH=arm64
else
	echo "Unknown host: ${HOSTNAME}"
	exit 1
fi

BASE_DIR=$(dirname $0)
IMAGE_FILE=$(mktemp --tmpdir router.${ARCH}.XXXXXXXXXX.img)
MOUNT_POINT=$(mktemp -p /mnt -d router.${ARCH}.XXXXXXXXXX)

# Build and mount an ext4 image file to put the root file system in
dd if=/dev/zero bs=1 count=0 seek=1G of=${IMAGE_FILE}
mkfs -t ext4 ${IMAGE_FILE}
mount -o loop ${IMAGE_FILE} ${MOUNT_POINT}

# Add dpkg excludes
mkdir -p ${MOUNT_POINT}/etc/dpkg/dpkg.cfg.d/
cat <<EOF > ${MOUNT_POINT}/etc/dpkg/dpkg.cfg.d/path-excludes
# Exclude docs
path-exclude=/usr/share/doc/*
# Only locale we want is English
path-exclude=/usr/share/locale/*
path-include=/usr/share/locale/en*/*
path-include=/usr/share/locale/locale.alias
# No man pages
path-exclude=/usr/share/man/*
EOF
# Setup fstab + mtab
echo "# Empty fstab as root is pre-mounted" > ${MOUNT_POINT}/etc/fstab
ln -s ../proc/self/mounts ${MOUNT_POINT}/etc/mtab

# Setup hostname
echo ${HOSTNAME} > ${MOUNT_POINT}/etc/hostname

# Add the root SSH keys
mkdir -p ${MOUNT_POINT}/root/.ssh/
cat <<EOF > ${MOUNT_POINT}/root/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAv8NkUeVdsVdegS+JT9qwFwiHEgcC9sBwnv6RjpH6I4d3im4LOaPOatzneMTZlH8Gird+H4nzluciBr63hxmcFjZVW7dl6mxlNX2t/wKvV0loxtEmHMoI7VMCnrWD0PyvwJ8qqNu9cANoYriZRhRCsBi27qPNvI741zEpXN8QQs7D3sfe4GSft9yQplfJkSldN+2qJHvd0AHKxRdD+XTxv1Ot26+ZoF3MJ9MqtK+FS+fD9/ESLxMlOpHD7ltvCRol3u7YoaUo2HJ+u31l0uwPZTqkPNS9fkmeCYEE0oXlwvUTLIbMnLbc7NKiLgniG8XaT0RYHtOnoc2l2UnTvH5qsQ== noodles@earth.li
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDQb9+qFemcwKhey3+eTh5lxp+3sgZXW2HQQEZMt9hPvVXk+MiiNMx9WUzxPJnwXqlmmVdKsq+AvjA0i505Pp8fIj5DdUBpSqpLghmzpnGuob7SSwXYj+352hjD52UC4S0KMKbIaUpklADgsCbtzhYYc4WoO8F7kK63tS5qa1XSZwwRwPbYOWBcNocfr9oXCVWD9ismO8Y0l75G6EyW8UmwYAohDaV83pvJxQerYyYXBGZGY8FNjqVoOGMRBTUcLj/QTo0CDQvMtsEoWeCd0xKLZ3gjiH3UrknkaPra557/TWymQ8Oh15aPFTr5FvKgAlmZaaM0tP71SOGmx7GpCsP4jZD1Xj/7QMTAkLXb+Ou6yUOVM9J4qebdnmF2RGbf1bwo7xSIX6gAYaYgdnppuxqZX1wyAy+A2Hie4tUjMHKJ6OoFwBsV1sl+3FobrPn6IuulRCzsq2aLqLey+PHxuNAYdSKo7nIDB3qCCPwHlDK52WooSuuMidX4ujTUw7LDTia9FxAawudblxbrvfTbg3DsiDBAOAIdBV37HOAKu3VmvYSPyqT80DEy8KFmUpCEau59DID9VERkG6PWPVMiQnqgW2Agn1miOBZeIQV8PFjenAySxjzrNfb4VY/i/kK9nIhXn92CAu4nl6D+VUlw+IpQ8PZlWlvVxAtLonpjxr9OTw== noodles@yubikey
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8UHj4IpfqUcGE4cTvLB0d2xmATSUzqtxW6ZhGbZxvQDKJesVW6HunrJ4NFTQuQJYgOXY/o82qBpkEKqaJMEFHTCjcaj3M6DIaxpiRfQfs0nhtzDB6zPiZn9Suxb0s5Qr4sTWd6iI9da72z3hp9QHNAu4vpa4MSNE+al3UfUisUf4l8TaBYKwQcduCE0z2n2FTi3QzmlkOgH4MgyqBBEaqx1tq7Zcln0P0TYZXFtrxVyoqBBIoIEqYxmFIQP887W50wQka95dBGqjtV+d8IbrQ4pB55qTxMd91L+F8n8A6nhQe7DckjS0Xdla52b9RXNXoobhtvx9K2prisagsHT noodles@cup
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK6iGog3WbNhrmrkglNjVO8/B6m7mN6q1tMm1sXjLxQa+F86ETTLiXNeFQVKCHYrk8f7hK0d2uxwgj6Ixy9k0Cw= noodles@sevai
EOF
# Bootstrap our install
debootstrap \
	--arch=${ARCH} \
	--include=collectd-core,conntrack,dnsmasq,ethtool,iperf3,kexec-tools,mosquitto,mtd-utils,mtr-tiny,ppp,tcpdump,rng-tools5,ssh,watchdog,wget \
	--exclude=dmidecode,isc-dhcp-client,isc-dhcp-common,makedev,nano \
	bullseye ${MOUNT_POINT} https://deb.debian.org/debian/
The bulk of the base system comes from the debootstrap step, including a bunch of extra packages that we want.
# Install mqtt-arp
cp ${BASE_DIR}/debs/mqtt-arp_1_${ARCH}.deb ${MOUNT_POINT}/tmp
chroot ${MOUNT_POINT} dpkg -i /tmp/mqtt-arp_1_${ARCH}.deb
rm ${MOUNT_POINT}/tmp/mqtt-arp_1_${ARCH}.deb

# Frob the mqtt-arp config so it starts after mosquitto
sed -i -e 's/After=.*/After=mosquitto.service/' ${MOUNT_POINT}/lib/systemd/system/mqtt-arp.service

# Frob watchdog so it starts earlier than multi-user
sed -i -e 's/After=.*/After=basic.target/' ${MOUNT_POINT}/lib/systemd/system/watchdog.service

# Make sure the watchdog is poking the device file
sed -i -e 's/^#watchdog-device/watchdog-device/' ${MOUNT_POINT}/etc/watchdog.conf

# Clean up docs + locales
rm -r ${MOUNT_POINT}/usr/share/doc/*
rm -r ${MOUNT_POINT}/usr/share/man/*
for dir in ${MOUNT_POINT}/usr/share/locale/*/; do
	if [ "${dir}" != "${MOUNT_POINT}/usr/share/locale/en/" ]; then
		rm -r ${dir}
	fi
done

# Set root password to root
echo "root:root" | chroot ${MOUNT_POINT} chpasswd
# Add security to sources.list + update
echo "deb https://security.debian.org/debian-security bullseye-security main" >> ${MOUNT_POINT}/etc/apt/sources.list
chroot ${MOUNT_POINT} apt update
chroot ${MOUNT_POINT} apt -y full-upgrade
chroot ${MOUNT_POINT} apt clean

# Cleanup the APT lists
rm ${MOUNT_POINT}/var/lib/apt/lists/www.*
rm ${MOUNT_POINT}/var/lib/apt/lists/security.*

# Disable the daily APT timer
rm ${MOUNT_POINT}/etc/systemd/system/timers.target.wants/apt-daily.timer

# Disable daily dpkg backup
cat <<EOF > ${MOUNT_POINT}/etc/cron.daily/dpkg
#!/bin/sh
# Don't do the daily dpkg backup
exit 0
EOF
# We don't want a persistent systemd journal
rmdir ${MOUNT_POINT}/var/log/journal

# Enable nftables
ln -s /lib/systemd/system/nftables.service \
	${MOUNT_POINT}/etc/systemd/system/sysinit.target.wants/nftables.service

# Add systemd-coredump + systemd-timesync user / group
echo "systemd-timesync:x:998:" >> ${MOUNT_POINT}/etc/group
echo "systemd-coredump:x:999:" >> ${MOUNT_POINT}/etc/group
echo "systemd-timesync:!*::" >> ${MOUNT_POINT}/etc/gshadow
echo "systemd-coredump:!*::" >> ${MOUNT_POINT}/etc/gshadow
echo "systemd-timesync:x:998:998:systemd Time Synchronization:/:/usr/sbin/nologin" >> ${MOUNT_POINT}/etc/passwd
echo "systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin" >> ${MOUNT_POINT}/etc/passwd
echo "systemd-timesync:!*:47358::::::" >> ${MOUNT_POINT}/etc/shadow
echo "systemd-coredump:!*:47358::::::" >> ${MOUNT_POINT}/etc/shadow

# Create /etc/.pwd.lock, otherwise it'll end up in the overlay
touch ${MOUNT_POINT}/etc/.pwd.lock
chmod 600 ${MOUNT_POINT}/etc/.pwd.lock

# Copy config files
cp --recursive --preserve=mode,timestamps ${BASE_DIR}/etc/* ${MOUNT_POINT}/etc/
cp --recursive --preserve=mode,timestamps ${BASE_DIR}/etc-${ARCH}/* ${MOUNT_POINT}/etc/
chroot ${MOUNT_POINT} chown mosquitto /etc/mosquitto/mosquitto.users
chroot ${MOUNT_POINT} chown mosquitto /etc/ssl/mqtt.home.key

# Build symlinks into flash for boot / modules
ln -s /mnt/flash/lib/modules ${MOUNT_POINT}/lib/modules
rmdir ${MOUNT_POINT}/boot
ln -s /mnt/flash/boot ${MOUNT_POINT}/boot

# Put our git revision into os-release
echo -n "GIT_VERSION=" >> ${MOUNT_POINT}/etc/os-release
(cd ${BASE_DIR} ; git describe --tags) >> ${MOUNT_POINT}/etc/os-release

# Add some stuff to root's .bashrc
cat << EOF >> ${MOUNT_POINT}/root/.bashrc
alias ls='ls -F --color=auto'
eval "\$(dircolors)"
case "\$TERM" in
xterm*|rxvt*)
PS1="\\[\\e]0;\\u@\\h: \\w\a\\]\$PS1"
;;
*)
;;
esac
EOF
# Build the squashfs
mksquashfs ${MOUNT_POINT} /tmp/router.${ARCH}.squashfs \
	-comp xz

# Save the installed package list off
chroot ${MOUNT_POINT} dpkg --get-selections > /tmp/wip-installed-packages
The files under /etc, shared across both routers, are the following:

apt/apt.conf.d/10periodic, apt/apt.conf.d/local-recommends
default/locale
dnsmasq.conf, dnsmasq.d/dhcp-ranges, dnsmasq.d/static-ips
hosts, resolv.conf
sysctl.conf
logrotate.conf, rsyslog.conf
mosquitto/mosquitto.users, mosquitto/conf.d/ssl.conf, mosquitto/conf.d/users.conf, mosquitto/mosquitto.acl, mosquitto/mosquitto.conf
mqtt-arp.conf
ssl/lets-encrypt-r3.crt, ssl/mqtt.home.key, ssl/mqtt.home.crt
ppp/ip-up.d/0000usepeerdns, ppp/ipv6-up.d/defaultroute, ppp/pap-secrets, ppp/chap-secrets
network/interfaces.d/pppoe-wan
nftables.conf
dnsmasq.d/interfaces
network/interfaces.d/eth0, network/interfaces.d/p1, network/interfaces.d/p2, network/interfaces.d/p7, network/interfaces.d/p8
ppp/peers/aquiss
ssh/ssh_host_ecdsa_key, ssh/ssh_host_ed25519_key, ssh/ssh_host_rsa_key, ssh/ssh_host_ecdsa_key.pub, ssh/ssh_host_ed25519_key.pub, ssh/ssh_host_rsa_key.pub
collectd/collectd.conf, collectd/collectd.conf.d/network.conf
Barbie No, seriously! If anyone can make a good film about a doll franchise, it's probably Greta Gerwig. Not only was Little Women (2019) more than admirable, the same could definitely be said for Lady Bird (2017). More importantly, I can't help but feel she was the real 'Driver' behind Frances Ha (2012), one of the better modern takes on Claudia Weill's revelatory Girlfriends (1978). Still, whenever I remember that Barbie will be a film about a billion-dollar toy and media franchise with a nettlesome history, I recall I rubbished the "Facebook film" that turned into The Social Network (2010). Anyway, the trailer for Barbie is worth watching, if only because it seems like a parody of itself.
Blitz It's difficult to overstate just how crucial the aerial bombing of London during World War II is to understanding the British psyche, despite it being a constructed phenomenon from the outset. Without wishing to underplay the deaths of over 40,000 civilians, Angus Calder pointed out in the 1990s that the modern mythology surrounding the event "did not evolve spontaneously; it was a propaganda construct directed as much at [then neutral] American opinion as at British." It will therefore be interesting to see how British-Grenadian-Trinidadian director Steve McQueen addresses a topic so essential to the British self-conception. (Remember the controversy in right-wing circles about the sole Indian soldier in Christopher Nolan's Dunkirk (2017)?) McQueen is perhaps best known for his 12 Years a Slave (2013), but he recently directed a six-part film anthology for the BBC which addressed the realities of post-Empire immigration to Britain, and this leads me to suspect he sees the Blitz and its surrounding mythology with a more critical perspective. But any attempt to complicate the story of World War II will be vigorously opposed in a way that will make the recent hullabaloo surrounding The Crown seem tame. All this is to say that the discourse surrounding this release may be as interesting as the film itself.
Dune, Part II Coming out of the cinema after the first part of Denis Villeneuve's adaptation of Dune (2021), I was struck by the conception that it was less of a fresh adaptation of the 1965 novel by Frank Herbert than an attempt to rehabilitate David Lynch's 1984 version, and in a broader sense, it was also an attempt to reestablish the primacy of cinema over streaming TV and the myriad of other distractions in our lives. I must admit I'm not a huge fan of the original novel, finding within it a certain prurience regarding hereditary military regimes and writing about them with a certain sense of glee that belies a secret admiration for them... not to mention an eyebrow-raising allegory for the Middle East. Still, Dune, Part II is going to be a fantastic spectacle.
Ferrari It'll be curious to see how this differs substantially from the recent Ford v Ferrari (2019), but given that Michael Mann's Heat (1995) so effectively re-energised the gangster/heist genre, I'm more than willing to kick the tires of this biopic of the founder of the eponymous car manufacturer. I'm in the minority for preferring Mann's Thief (1981) over Heat, in part because the former deals in more abstract themes, so I'd have perhaps preferred to look forward to a more conceptual film from Mann over a story about one specific guy.
How Do You Live There are a few directors one can look forward to watching almost without qualification, and Hayao Miyazaki (My Neighbor Totoro, Kiki's Delivery Service, Princess Mononoke, Howl's Moving Castle, etc.) is one of them. And this is especially so given that The Wind Rises (2013) was meant to be the last collaboration between Miyazaki and Studio Ghibli. Let's hope he is able to come out of retirement in another ten years.
Indiana Jones and the Dial of Destiny Given I had a strong dislike of Indiana Jones and the Kingdom of the Crystal Skull (2008), I seriously doubt I will enjoy anything this film has to show me, but with 1981's Raiders of the Lost Ark remaining one of my most treasured films (read my brief homage), I still feel a strong sense of obligation towards the Indiana Jones name, despite it feeling like the copper is being pulled out of the walls of this franchise today.
Kafka I only know Polish filmmaker Agnieszka Holland through her Spoor (2017), an adaptation of Olga Tokarczuk's 2009 eco-crime novel Drive Your Plow Over the Bones of the Dead. I wasn't an unqualified fan of Spoor (nor the book on which it is based), but I am interested in Holland's take on the life of Czech author Franz Kafka, an author enmeshed with twentieth-century art and philosophy, especially that of central Europe. Holland has mentioned she intends to tell the story "as a kind of collage," and I can hope that it is an adventurous take on the over-furrowed biopic genre. Or perhaps Gregor Samsa will awake from uneasy dreams to find himself transformed in his bed into a huge verminous biopic.
The Killer It'll be interesting to see what path David Fincher is taking today, especially after his puzzling and strangely cold Mank (2020) portraying the writing process behind Orson Welles' Citizen Kane (1941). The Killer is said to be a straight-to-Netflix thriller based on the graphic novel about a hired assassin, which makes me think of Fincher's Zodiac (2007), and, of course, Se7en (1995). I'm not as entranced by Fincher as I used to be, but any film with Michael Fassbender and Tilda Swinton (with a score by Trent Reznor) is always going to get my attention.
Killers of the Flower Moon In Killers of the Flower Moon, Martin Scorsese directs an adaptation of a book about the FBI's investigation into a conspiracy to murder Osage tribe members in the early years of the twentieth century in order to deprive them of their oil-rich land. (The only thing more quintessentially American than apple pie is a conspiracy combined with a genocide.) Separate from learning more about this disquieting chapter of American history, I'd love to discover what attracted Scorsese to this particular story: he's one of the few top-level directors who have the ability to lucidly articulate their intentions and motivations.
Napoleon It often strikes me that, despite all of his achievements and fame, it's somehow still possible to claim that Ridley Scott is relatively underrated compared to other directors working at the top level today. Besides that, though, I'm especially interested in this film, not least of all because I just read Tolstoy's War and Peace (read my recent review) and am working my way through the mind-boggling 431-minute Soviet TV adaptation, but also because several auteur filmmakers (including Stanley Kubrick) have tried to make a Napoleon epic and failed.
Oppenheimer In a way, a biopic about the scientist responsible for the atomic bomb and the Manhattan Project seems almost perfect material for Christopher Nolan. He can certainly rely on stars to queue up to be in his movies (Robert Downey Jr., Matt Damon, Kenneth Branagh, etc.), but whilst I'm certain it will be entertaining on many fronts, I fear it will fall into the well-established Nolan mould of yet another single man struggling with obsession, deception and guilt who is trying in vain to balance order and chaos in the world.
The Way of the Wind Marked by philosophical and spiritual overtones, all of Terrence Malick's films are perfumed with themes of transcendence, nature and the inevitable conflict between instinct and reason. My particular favourite is his stunning Days of Heaven (1978), but The Thin Red Line (1998) and A Hidden Life (2019) also touched me in ways difficult to relate, and are among the few films about the Second World War that don't touch off my sensitivity about them (see my remarks about Blitz above). It is therefore somewhat Malickian that his next film will be a biblical drama about the life of Jesus. Given Malick's filmography, I suspect this will be far more subdued than William Wyler's 1959 Ben-Hur and significantly more equivocal in its conviction compared to Pier Paolo Pasolini's ardently progressive The Gospel According to St. Matthew (1964). However, little beyond that can be guessed, and the film may not even appear until 2024 or even 2025.
Zone of Interest I was mesmerised by Jonathan Glazer's Under the Skin (2013), and there is much to admire in his borderline 'revisionist gangster' film Sexy Beast (2000), so I will definitely be on the lookout for this one. The only thing making me hesitate is that Zone of Interest is based on a book by Martin Amis about a romance set inside the Auschwitz concentration camp. I haven't read the book, but Amis has something of a history in his grappling with the history of the twentieth century, and he seems to do it in a way that never sits right with me. But if Paul Verhoeven's Starship Troopers (1997) proves anything at all, it's all in the adaptation.
sbuild with mmdebstrap and apt-cacher-ng.
The usual tool for building Debian packages is dpkg-buildpackage, or a user-friendly wrapper like debuild, and while these are great tools, if you want to upload something to the Debian archive they lack the required separation from the system they are run on to ensure that your packaging also works on a different system. The usual candidate here is sbuild. But setting up a schroot is tedious and performance tuning can be annoying. There is an alternative backend for sbuild that promises to make everything simpler: unshare. In this tutorial I will show you how to set up sbuild with this backend.
In addition to the normal performance tweaking, caching downloaded packages can be a huge performance increase when rebuilding packages. I do rebuilds quite often, mostly when a new dependency got introduced that I didn't specify in debian/control yet, or when lintian notices something I can easily fix. So let's begin with setting up this caching.

First, install apt-cacher-ng:
sudo apt install apt-cacher-ng
A pop-up will appear; if you are unsure how to answer it, select no - we don't need it for this use case.
To enable apt-cacher-ng on your system, create /etc/apt/apt.conf.d/02proxy and insert:
Acquire::http::proxy "http://127.0.0.1:3142";
Acquire::https::proxy "DIRECT";
In /etc/apt-cacher-ng/acng.conf you can adjust the value of ExThreshold to hold packages for a shorter or longer duration. The right length depends on your specific use case and resources: a longer threshold takes more disk space, while a short threshold like one day effectively only reduces the build time for rebuilds.
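For example, holding cached packages for two weeks would look like this in acng.conf (assuming I'm reading the documentation right that the value is in days):

ExThreshold: 14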
If you encounter weird issues on apt update at some point in the future, you can try cleaning the cache from apt-cacher-ng. You can use this script:
Next, install mmdebstrap:
sudo apt install mmdebstrap
We will create a small helper script to ease creating a chroot. Open ~/.local/bin/mmupdate and insert:
#!/bin/sh
mmdebstrap \
--variant=buildd \
--aptopt='Acquire::http::proxy "http://127.0.0.1:3142";' \
--arch=amd64 \
--components=main,contrib,non-free \
unstable \
~/.cache/sbuild/unstable-amd64.tar.xz \
http://deb.debian.org/debian
Notes:

--aptopt enables apt-cacher-ng inside the chroot.
--arch sets the CPU architecture (see the Debian Wiki).
--components sets the archive components; if you don't want non-free packages you might want to remove some entries here.
unstable sets the Debian release; you can also set for example bookworm-backports here.
unstable-amd64.tar.xz is the output tarball containing the chroot; change it according to your pick of CPU architecture and Debian release.
http://deb.debian.org/debian is the Debian mirror; you should set it to the same one you use in your /etc/apt/sources.list.

Make mmupdate executable and run it once:
chmod +x ~/.local/bin/mmupdate
mkdir -p ~/.cache/sbuild
~/.local/bin/mmupdate
If you execute mmupdate again you can see that the downloading stage is much faster thanks to apt-cacher-ng. For me the difference is from about 115s to about 95s. Your results may vary; this depends on the speed of your internet connection, the Debian mirror, and your disk.
If you have used the schroot backend and sbuild-update before, you will probably notice that creating a new chroot with mmdebstrap is slower. It would be a bit annoying to do this manually before we start a new Debian packaging session, so let's create a systemd service that does this for us.
First create a folder for user services:
mkdir -p ~/.config/systemd/user
Create ~/.config/systemd/user/mmupdate.service and add:
[Unit]
Description=Run mmupdate
Wants=network-online.target
[Service]
Type=oneshot
ExecStart=%h/.local/bin/mmupdate
Start the service and test that it works:
systemctl --user daemon-reload
systemctl --user start mmupdate
systemctl --user status mmupdate
Create ~/.config/systemd/user/mmupdate.timer:
[Unit]
Description=Run mmupdate daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
Enable the timer:
systemctl --user enable mmupdate.timer
Now mmupdate will be run automatically every day. You can adjust the period if you think daily rebuilds are a bit excessive. A neat advantage of periodic rebuilds is that they keep the base files in your apt-cacher-ng cache warm every time they run.
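If daily feels excessive, only the OnCalendar= line in mmupdate.timer needs to change; for example, for weekly runs:

[Timer]
OnCalendar=weekly
Persistent=true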
Next, install sbuild and (optionally) autopkgtest:
sudo apt install --no-install-recommends sbuild autopkgtest
Create ~/.sbuildrc and insert:
# backend for using mmdebstrap chroots
$chroot_mode = 'unshare';
# build in tmpfs
$unshare_tmpdir_template = '/dev/shm/tmp.sbuild.XXXXXXXX';
# upgrade before starting build
$apt_update = 1;
$apt_upgrade = 1;
# build everything including source for source-only uploads
$build_arch_all = 1;
$build_arch_any = 1;
$build_source = 1;
$source_only_changes = 1;
# go to shell on failure instead of exiting
$external_commands = { "build-failed-commands" => [ [ '%SBUILD_SHELL' ] ] };
# always clean build dir, even on failure
$purge_build_directory = "always";
# run lintian
$run_lintian = 1;
$lintian_opts = [ '-i', '-I', '-E', '--pedantic' ];
# do not run piuparts
$run_piuparts = 0;
# run autopkgtest
$run_autopkgtest = 1;
$autopkgtest_root_args = '';
$autopkgtest_opts = [ '--apt-upgrade', '--', 'unshare', '--release', '%r', '--arch', '%a', '--prefix=/dev/shm/tmp.autopkgtest.' ];
# set uploader for correct signing
$uploader_name = 'Stephan Lachnit <stephanlachnit@debian.org>';
You should adjust uploader_name. If you don't want to run autopkgtest or lintian by default, you can also disable them here. Note that for packages that need a lot of space for building, you might want to comment out the unshare_tmpdir_template line to prevent an OOM build failure.
You can now build your Debian packages with the sbuild command :)

As a bonus, you can add the following to your ~/.bashrc (with adjusted name / email):
export DEBFULLNAME="<your_name>"
export DEBEMAIL="<your_email>"
export DEB_BUILD_OPTIONS="parallel=<threads>"
In particular, adjust the value of parallel to enable parallel builds.
If you are new to signing / uploading your package, first install the required tools:
sudo apt install devscripts dput-ng
Create ~/.devscripts
and insert:
DEBSIGN_KEYID=<your_gpg_fingerprint>
USCAN_SYMLINK=rename
You can now sign the .changes file with:
debsign ../<pkgname_version_arch>.changes
And for source-only uploads with:
debsign -S ../<pkgname_version_arch>_source.changes
If you don't introduce a new binary package, you always want to go with source-only changes.
You can now upload the package to Debian with:
dput ../<filename>.changes
You can use --include=auto-apt-proxy instead of the --aptopt option in mmdebstrap to detect apt proxies automatically.
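With that change, the mmupdate helper above would drop the --aptopt line in favour of the include option; roughly (and assuming auto-apt-proxy does find the local apt-cacher-ng):

#!/bin/sh
mmdebstrap \
	--variant=buildd \
	--include=auto-apt-proxy \
	--arch=amd64 \
	--components=main,contrib,non-free \
	unstable \
	~/.cache/sbuild/unstable-amd64.tar.xz \
	http://deb.debian.org/debian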
He also let me know that it is possible to use autopkgtest on tmpfs (the config in this blog post has been updated accordingly) and added an entry on the sbuild wiki page on how to set up sbuild+unshare with ccache if you often need to build a large package.
Further, using --variant=apt and --include=build-essential will produce smaller build chroots if wished. Conversely, one can of course also use the --include option to pull in debhelper and lintian (or any other packages you like) to further decrease the setup time. However, staying with the buildd variant is a good choice for official uploads.
retroarch-dev binary package that contains the appropriate include files as part of retroarch). It sat in NEW for a while (including an initial reject because I'd missed an attribution in the debian/copyright), and was accepted yesterday.
There's a final piece of the puzzle, and that's the Kodi config that ties together the libretro core with game.libretro and presents the emulator to Kodi as a fully fledged add-on. The kodi-game folk have a neat tool, kodi-game-scripting, which automates the heavy lifting of producing this config. I've done some local modifications that make it a bit more useful for producing config that can be embedded in the Debian libretro-* packages directly, which I should upload somewhere, but it's all a bit rough 'n' ready at present. It was enough to allow me to produce some kodi-game-libretro-bsnes-* packages as part of libretro-bsnes-mercury.
With that complete, all the packages needed for playing SNES games under Kodi are now present in Debian. I need to upload libretro-bsnes-mercury to unstable (it went to experimental while waiting for kodi-game-libretro to be accepted), and kodi-game-libretro needs another source-only upload, but once that's done both should be in good shape to migrate to testing and be part of the upcoming bookworm release.
What else is there to do? I'd like to get Kodi config included in the other libretro packages that are already part of Debian. That's going to need the Controller Topology Project to be packaged so that the controller details are available (I was lucky in that the SNES controller is already part of the Kodi package). I need to work out if I can turn kodi-game-scripting into some sort of dh helper to help automate things. But I've done some local testing with genesisplusgx and it works fine as expected.
The other thing is that games are not yet first-class citizens in Kodi; the normal browser interface you get for movies, music and TV shows is not available for games. Currently I've been trying out the ROM Collection Browser, though I find its automated scraping isn't as good as I'd like. A friend has recommended the Advanced Emulator Launcher, but I haven't taken a look at it. Either way, I'd like to ultimately get one of them packaged up as well, though not in time for bookworm.
Anyway. My hope is that these updated and new packages prove useful to someone else. You can tell I'm biased towards 90s-era consoles, but if you've enough CPU grunt there are a bunch of more recent cores available too. Big thanks to the Debian FTP Master team for letting these through NEW so close to release. And all the upstream devs - RetroArch is a great framework from my perspective as a user, and the Kodi Game folk have done massive amounts of work that made my life much easier when preparing things for Debian.
#!/bin/sh
#
# Author: Petter Reinholdtsen
# License: GPL v2 or later at your choice.
#
# Validate the MIME setup, making sure motor types have
# application/vnd.openmotor+yaml associated with them and are connected
# to the openmotor desktop file.

retval=0
mimetype="application/vnd.openmotor+yaml"
testfile="test/data/real/o3100/motor.ric"
mydesktopfile="openmotor.desktop"

filemime="$(xdg-mime query filetype "$testfile")"
if [ "$mimetype" != "$filemime" ] ; then
    retval=1
    echo "error: xdg-mime claims motor file MIME type is $filemime, not $mimetype"
else
    echo "success: xdg-mime reports correct MIME type $mimetype for motor file"
fi

desktop=$(xdg-mime query default "$mimetype")
if [ "$mydesktopfile" != "$desktop" ]; then
    retval=1
    echo "error: xdg-mime claims motor file should be handled by $desktop, not $mydesktopfile"
else
    echo "success: xdg-mime agrees motor file should be handled by $mydesktopfile"
fi

exit $retval

It is a simple way to ensure your users are not very surprised when they try to open one of your file formats in their file browser. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
BEGIN:VEVENT
DTSTART;TZID=Australia/Sydney:20230206T000000
DTEND;TZID=Australia/Sydney:20230206T000000
SUMMARY:School Term starts
END:VEVENT
The event starting and stopping date and time are the DTSTART and DTEND lines. Both of them have the date of 2023/02/06, or 6th February 2023, and a time of 00:00:00, or midnight. So the calendar is doing the right thing; we need to fix the feed!
The Fix
I wrote a quick and dirty PHP script to download the feed from the real site, change the DTSTART and DTEND lines to all-day events and leave the rest of it alone.
<?php
$site = $_GET['s'];
if ($site == 'site1') {
    $REMOTE_URL = 'https://site1.example.net/ical_feed';
} elseif ($site == 'site2') {
    $REMOTE_URL = 'https://site2.example.net/ical_feed';
} else {
    http_response_code(400);
    die();
}

$fp = fopen($REMOTE_URL, "r");
if (!$fp) die("fopen");

header('Content-Type: text/calendar');
while (($line = fgets($fp, 1024)) !== false) {
    $line = preg_replace(
        '/^(DTSTART|DTEND);[^:]+:([0-9]{8})T000[01]00/',
        '${1};VALUE=DATE:${2}',
        $line);
    echo $line;
}
?>

It's pretty quick and nasty but gets the job done. So what is it doing?
The script gets the site ID from the parameter s and matches it to either site1 or site2 to obtain the URL; if you only had one site to fix you could just set the REMOTE_URL variable. It then opens the remote URL with fopen() (with nasty error handling) and uses a while loop to read the contents of the remote site line by line. preg_replace performs a Perl-compatible regular expression replacement. The PCRE is /^(DTSTART|DTEND);[^:]+:([0-9]{8})T000[01]00/, which rewrites the DTSTART and DTEND timestamps as all-day dates, so the entry becomes:
BEGIN:VEVENT
DTSTART;VALUE=DATE:20230206
DTEND;VALUE=DATE:20230206
SUMMARY:School Term starts
END:VEVENT
The calendar then shows it properly as an all-day event. I would check the script works before doing the next step. You can use things like curl or wget to download it. If you use a normal browser, it will probably just download the translated file.
If you're not seeing the right thing then it's probably the PCRE failing. You can check it online with a regex checker such as https://regex101.com. The site has saved my PCRE and match so you've got something to start with.
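The transformation can also be sanity-checked offline. Here is a small self-contained sketch that reimplements the rewrite with plain string handling (in Rust rather than PCRE, purely for illustration; the function name is invented):

```rust
// Rewrite "DTSTART;TZID=...:YYYYMMDDT000000" (or T000100, and likewise
// DTEND) into the all-day form "DTSTART;VALUE=DATE:YYYYMMDD", mirroring
// the PCRE used by the PHP script; other lines pass through unchanged.
fn fix_line(line: &str) -> String {
    for key in ["DTSTART", "DTEND"] {
        if let Some(rest) = line.strip_prefix(key) {
            if rest.starts_with(';') {
                if let Some(colon) = rest.find(':') {
                    let stamp = &rest[colon + 1..];
                    if stamp.len() == 15
                        && stamp[..8].bytes().all(|b| b.is_ascii_digit())
                        && (&stamp[8..] == "T000000" || &stamp[8..] == "T000100")
                    {
                        return format!("{};VALUE=DATE:{}", key, &stamp[..8]);
                    }
                }
            }
        }
    }
    line.to_string()
}

fn main() {
    assert_eq!(
        fix_line("DTSTART;TZID=Australia/Sydney:20230206T000000"),
        "DTSTART;VALUE=DATE:20230206"
    );
    // Non-matching lines are left alone.
    assert_eq!(fix_line("SUMMARY:School Term starts"), "SUMMARY:School Term starts");
}
```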
Calendar settings
The last thing to do is to change the URL in your calendar settings. Each calendar system has a different way of doing it. For Google Calendar they provide instructions, and you want to follow the section titled "Use a link to add a public calendar".
The URL here is not the actual site's URL (which you would have put into the REMOTE_URL variable before) but the URL of your script plus the ?s=site1 part. So if your script is aliased to /myical.php, the site ID was site1 and your website is www.example.com, the URL would be https://www.example.com/myical.php?s=site1.
You should then see the events appear as all-day events on your calendar.
Series: | Innkeeper Chronicles #6 |
Publisher: | NYLA Publishing |
Copyright: | 2022 |
ISBN: | 1-64197-239-4 |
Format: | Kindle |
Pages: | 440 |
Ok-wrapping as needed in today's Rust is a significant distraction, because there are multiple ways to do it. They are all slightly awkward in different ways, so are least-bad in different situations. You must choose a way for every fallible function, and sometimes change a function from one pattern to another.
Rust really needs #[throws] as a first-class language feature. Code using #[throws] is simpler and clearer.
Please try out withoutboats's fehler. I think you will like it.
Contents
- What is Ok wrapping? Intro to Rust error handling
- A minor inconvenience, or a significant distraction?
- Idioms for Ok-wrapping - a bestiary
- What is to be done, then?
- Please can we have #[throws] in the Rust language
- Explicitness
- Appendix - examples showing code with Ok wrapping is worse than code using #[throws]
As for fehler, I have been using it in most of my personal projects.
For Reasons I recently had a go at eliminating the dependency on fehler from Hippotat. So, I made a branch, deleted the dependency and imports, and started on the whack-a-mole with the compiler errors.
After about a half hour of this, I was starting to feel queasy.
After an hour I had decided that basically everything I was doing was making the code worse. And, bizarrely, I kept having to make individual decisions about what idiom to use in each place. I couldn't face it any more.
After sleeping on the question I decided that Hippotat would be in Debian with fehler, or not at all. Happily the Debian Rust Team generously helped me out, so the answer is that fehler is now in Debian, so it's fine.
For me this experience, of trying to convert Rust-with-#[throws] to Rust-without-#[throws], brought the Ok-wrapping problem into sharp focus.
What is Ok wrapping? Intro to Rust error handling
(You can skip this section if you're already a seasoned Rust programmer.)
In Rust, fallibility is represented by functions that return Result<SuccessValue, Error>: this is a generic type, representing either whatever SuccessValue is (in the Ok variant of the data-bearing enum) or some Error (in the Err variant). For example, std::fs::read_to_string, which takes a filename and returns the contents of the named file, returns Result<String, std::io::Error>.
This is a nice and typesafe formulation of, and generalisation of, the traditional C practice, where a function indicates in its return value whether it succeeded, and errors are indicated with an error code.
Result is part of the standard library and there are convenient facilities for checking for errors, extracting successful results, and so on. In particular, Rust has the postfix ? operator, which, when applied to a Result, does one of two things: if the Result was Ok, it yields the inner successful value; if the Result was Err, it returns early from the current function, returning an Err in turn to the caller.
This means you can write things like this:
let input_data = std::fs::read_to_string(input_file)?;
and the error handling is pretty automatic. You get a compiler warning, or a type error, if you forget the ?, so you can't accidentally ignore errors.
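A self-contained sketch (function and names invented for illustration, using str::parse as the fallible operation):

```rust
use std::num::ParseIntError;

// Minimal sketch: `?` either yields the Ok value or early-returns the Err
// to our caller, so errors cannot be silently dropped.
fn parse_doubled(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // on "nope" this early-returns the Err
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_doubled("21"), Ok(42));
    assert!(parse_doubled("nope").is_err());
}
```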
But, there is a downside. When you are returning a successful outcome from your function, you must convert it into a Result. After all, your fallible function has return type Result<SuccessValue, Error>, which is a different type to SuccessValue. So, for example, inside std::fs::read_to_string, we see this:
let mut string = String::new();
file.read_to_string(&mut string)?;
Ok(string)
string has type String; fs::read_to_string must return Result<String, ..>, so at the end of the function we must return Ok(string). This applies to return statements, too: if you want an early successful return from a fallible function, you must write return Ok(whatever).
This is particularly annoying for functions that don't actually return a nontrivial value. Normally, when you write a function that doesn't return a value you don't write the return type. The compiler interprets this as syntactic sugar for -> (), ie, that the function returns (), the empty tuple, used in Rust as a dummy value in these kinds of situations. A block { ... } whose last statement ends in a ; has type (). So, when you fall off the end of a function, the return value is (), without you having to write it. So you simply leave out the stuff in your program about the return value, and your function doesn't have one (i.e. it returns ()).
But, a function which either fails with an error, or completes successfully without returning anything, has return type Result<(), Error>. At the end of such a function, you must explicitly provide the success value. After all, if you just fall off the end of a block, it means the block has value (), which is not of type Result<(), Error>. So the fallible function must end with Ok(()), as we see in the example for std::fs::read_to_string.
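A minimal sketch of such a function (names invented for illustration):

```rust
// A fallible function with no interesting success value has return type
// Result<(), Error>; falling off the end would yield plain (), so the
// success path must end with an explicit Ok(()).
fn check_positive(s: &str) -> Result<(), String> {
    let n: i32 = s.parse().map_err(|e| format!("not a number: {}", e))?;
    if n <= 0 {
        return Err(format!("{} is not positive", n));
    }
    Ok(()) // mandatory: the block must have type Result<(), String>, not ()
}

fn main() {
    assert_eq!(check_positive("5"), Ok(()));
    assert!(check_positive("-3").is_err());
    assert!(check_positive("x").is_err());
}
```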
A minor inconvenience, or a significant distraction?
I think the need for Ok-wrapping on all success paths from fallible functions is generally regarded as just a minor inconvenience. Certainly the experienced Rust programmer gets very used to it. However, while trying to remove fehler's #[throws] from Hippotat, I noticed something that is evident in codebases using vanilla Rust (without fehler) but which goes un-remarked.
There are multiple ways to write the Ok-wrapping, and the different ways are appropriate in different situations.
See the following examples, all taken from a real codebase. (And it's not just me: I do all of these in different places - when I don't have fehler available - but all these examples are from code written by others.)
Idioms for Ok-wrapping - a bestiary
Wrap just a returned variable binding
If you have the return value in a variable, you can write Ok(retval) at the end of the function, instead of retval.
pub fn take_until(&mut self, term: u8) -> Result<&'a [u8]> {
    // several lines of code
    Ok(result)
}
If the returned value is not already bound to a variable, making a function fallible might mean choosing to bind it to a variable.
Wrap a nontrivial return expression
Even if it's not just a variable, you can wrap the expression which computes the returned value. This is often done if the returned value is a struct literal:
fn take_from(r: &mut Reader<'_>) -> Result<Self> {
    // several lines of code
    Ok(AuthChallenge { challenge, methods })
}
Introduce Ok(()) at the end
For functions returning Result<()>, you can write Ok(()). This is usual, but not ubiquitous, since sometimes you can omit it.
Wrap the whole body
If you don't have the return value in a variable, you can wrap the whole body of the function in Ok( ... ). Whether this is a good idea depends on how big and complex the body is.
fn from_str(s: &str) -> std::result::Result<Self, Self::Err> {
    Ok(match s {
        "Authority" => RelayFlags::AUTHORITY,
        // many other branches
        _ => RelayFlags::empty(),
    })
}
Omit the wrap when calling fallible sub-functions
If your function wraps another function call of the same return and error type, you don't need to write the Ok at all. Instead, you can simply call the function and not apply ?.
You can do this even if your function selects between a number of different sub-functions to call:
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
    if flags::unsafe_logging_enabled() {
        std::fmt::Display::fmt(&self.0, f)
    } else {
        self.0.display_redacted(f)
    }
}
But this doesn't work if the returned error type isn't the same, but needs the autoconversion implied by the ? operator.
Convert a fallible sub-function error with Ok( ... ?)
If the final thing a function does is chain to another fallible function, but with a different error type, the error must be converted somehow. This can be done with ?.
fn try_from(v: i32) -> Result<Self, Error> {
    Ok(Percentage::new(v.try_into()?))
}
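The autoconversion that ? performs goes through the From trait; a self-contained sketch with an invented error type:

```rust
// `?` converts the sub-function's error via From, so Ok(...?) both
// converts the error type and wraps the success value.
#[derive(Debug, PartialEq)]
struct MyError(String);

impl From<std::num::ParseIntError> for MyError {
    fn from(e: std::num::ParseIntError) -> Self {
        MyError(e.to_string())
    }
}

fn parse_i32(s: &str) -> Result<i32, MyError> {
    Ok(s.parse()?) // ParseIntError auto-converted to MyError by `?`
}

fn main() {
    assert_eq!(parse_i32("7"), Ok(7));
    assert!(parse_i32("seven").is_err());
}
```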
Convert a fallible sub-function error with .map_err
Or, rarely, people solve the same problem by converting explicitly with .map_err:
pub fn create_unbootstrapped(self) -> Result<TorClient<R>> {
    // several lines of code
    TorClient::create_inner(
        // several parameters
    )
    .map_err(ErrorDetail::into)
}
What is to be done, then?
The fehler library is in excellent taste and has the answer. With fehler:
- Success values, including those given to explicit return, are automatically wrapped up in Ok. So the body of a fallible function is just like the body of an infallible one, except for places where error handling is actually involved.
- Errors are still handled with ? error chaining, and with a new explicit syntax for error return.
- We need not mention Result unless we need to.
fehler provides:
- #[throws(ErrorType)] to make a function fallible in this way.
- throw!(error) for explicitly failing.
The result:
- There is no need to think about Ok-wrapping, since it's automatic rather than explicitly written out.
- The form of each call (e.g. write!(...)?; vs write!(...) in a formatter) does not depend on whether the error needs converting, how complex the body is, or whether the final expression in the function is itself fallible.
- To make a function fallible, one adds #[throws] to its definition, and ? to its call sites. One does not need to edit the body, or the return type.
- There is no need for a Result alias shadowing std::result::Result, which means that when one needs to speak of Result explicitly, the code is clearer.
fehler does have some limitations:
- One cannot use #[throws] on a closure.
- #[throws] is implemented as a macro by fehler, so sometimes return statements inside macro calls are untreated. This will lead to a type error, so isn't a correctness hazard, but it can be a nuisance if you like other syntax extensions, eg if_chain.
- #[must_use] #[throws(Error)] fn obtain() -> Thing; ought to mean that Thing must be used, not the Result<Thing, Error>.
But Rust-with-#[throws] is so much nicer a language than Rust-with-mandatory-Ok-wrapping, that these are minor inconveniences.
Please can we have #[throws] in the Rust language
This ought to be part of the language, not a macro library. In the compiler, it would be possible to get all the corner cases right. It would make the feature available to everyone, and it would quickly become idiomatic Rust throughout the community.
It is evident from reading writings from the time, particularly those from withoutboats, that there were significant objections to automatic Ok-wrapping. It seems to have become quite political, and some folks burned out on the topic.
Perhaps, now, a couple of years later, we can revisit this area and solve this problem in the language itself?
Explicitness
An argument I have seen made against automatic Ok-wrapping, and, in general, against any kind of useful language affordance, is that it makes things less explicit.
But this argument is fundamentally wrong for Ok-wrapping. Explicitness is not an unalloyed good. We humans have only limited attention. We need to focus that attention where it is actually needed. So explicitness is good in situations where what is going on is unusual; or would otherwise be hard to read; or is tricky or error-prone. Generally: explicitness is good for things where we need to direct humans' attention.
But Ok-wrapping is ubiquitous in fallible Rust code. The compiler mechanisms and type systems almost completely defend against mistakes. All but the most novice programmer knows what's going on, and the very novice programmer doesn't need to. Rust's error handling arrangements are designed specifically so that we can avoid worrying about fallibility unless necessary, except for the Ok-wrapping. Explicitness about Ok-wrapping directs our attention away from whatever other things the code is doing: it is a distraction.
So, explicitness about Ok-wrapping is a bad thing.
Appendix - examples showing code with Ok wrapping is worse than code using #[throws]
Observe these diffs, from my abandoned attempt to remove the fehler dependency from Hippotat.
I have a type alias AE for the usual error type (AE stands for anyhow::Error). In the non-#[throws] code, I end up with a type alias AR<T> for Result<T, AE>, which I think is more opaque, but at least that avoids typing out -> Result<_, AE> a thousand times. Some people like to have a local Result alias, but that means that the standard Result has to be referred to as StdResult or std::result::Result.
Each example is shown twice: first with fehler and #[throws], then in vanilla Rust with Result<> and mandatory Ok-wrapping.

Return value clearer, error return less wordy:

With fehler and #[throws]:

impl Parseable for Secret {
  #[throws(AE)]
  fn parse(s: Option<&str>) -> Self {
    let s = s.value()?;
    if s.is_empty() { throw!(anyhow!("secret value cannot be empty")) }
    Secret(s.into())
  }
}

Vanilla Rust:

impl Parseable for Secret {
  fn parse(s: Option<&str>) -> AR<Self> {
    let s = s.value()?;
    if s.is_empty() { return Err(anyhow!("secret value cannot be empty")) }
    Ok(Secret(s.into()))
  }
}

No need to wrap whole match statement in Ok( ... ):

With fehler and #[throws]:

#[throws(AE)]
pub fn client<T>(&self, key: &'static str, skl: SKL) -> T
  where T: Parseable + Default
{
  match self.end {
    LinkEnd::Client => self.ordinary(key, skl)?,
    LinkEnd::Server => default(),
  }
}

Vanilla Rust:

pub fn client<T>(&self, key: &'static str, skl: SKL) -> AR<T>
  where T: Parseable + Default
{
  Ok(match self.end {
    LinkEnd::Client => self.ordinary(key, skl)?,
    LinkEnd::Server => default(),
  })
}

Return value and Ok(()) entirely replaced by #[throws]:

With fehler and #[throws]:

impl Display for Loc {
  #[throws(fmt::Error)]
  fn fmt(&self, f: &mut fmt::Formatter) {
    write!(f, "{:?}:{}", &self.file, self.lno)?;
    if let Some(s) = &self.section {
      write!(f, " {}", s)?;
    }
  }
}

Vanilla Rust:

impl Display for Loc {
  fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
    write!(f, "{:?}:{}", &self.file, self.lno)?;
    if let Some(s) = &self.section {
      write!(f, " {}", s)?;
    }
    Ok(())
  }
}

Call to write! now looks the same as in the more complex case shown above:

With fehler and #[throws]:

impl Debug for Secret {
  #[throws(fmt::Error)]
  fn fmt(&self, f: &mut fmt::Formatter) {
    write!(f, "Secret(***)")?;
  }
}

Vanilla Rust:

impl Debug for Secret {
  fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
    write!(f, "Secret(***)")
  }
}

Much tiresome return Ok() noise removed:

With fehler and #[throws]:

impl FromStr for SectionName {
  type Err = AE;
  #[throws(AE)]
  fn from_str(s: &str) -> Self {
    match s {
      COMMON => return SN::Common,
      LIMIT => return SN::GlobalLimit,
      _ => { }
    };
    if let Ok(n@ ServerName(_)) = s.parse() { return SN::Server(n) }
    if let Ok(n@ ClientName(_)) = s.parse() { return SN::Client(n) }
    // ...
    if client == LIMIT { return SN::ServerLimit(server) }
    let client = client.parse().context("client name in link section name")?;
    SN::Link(LinkName { server, client })
  }
}

Vanilla Rust:

impl FromStr for SectionName {
  type Err = AE;
  fn from_str(s: &str) -> AR<Self> {
    match s {
      COMMON => return Ok(SN::Common),
      LIMIT => return Ok(SN::GlobalLimit),
      _ => { }
    };
    if let Ok(n@ ServerName(_)) = s.parse() { return Ok(SN::Server(n)) }
    if let Ok(n@ ClientName(_)) = s.parse() { return Ok(SN::Client(n)) }
    // ...
    if client == LIMIT { return Ok(SN::ServerLimit(server)) }
    let client = client.parse().context("client name in link section name")?;
    Ok(SN::Link(LinkName { server, client }))
  }
}
Next.